    Perceptually Meaningful Image Editing: Depth

    We introduce the concept of perceptually meaningful image editing and present two techniques for manipulating the apparent depth of objects in an image. The user loads an image, selects an object, and specifies whether the object should appear closer or farther away. The system automatically determines target values for the object and/or background that achieve the desired depth change. These depth editing operations, based on techniques used by traditional artists, manipulate either the luminance or the color temperature of different regions of the image. By blending in the gradient domain and reconstructing with a Poisson solver, the appearance of false edges is minimized. The results of a preliminary user study, designed to evaluate the effectiveness of these techniques, are also presented.
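    The gradient-domain blend and Poisson reconstruction can be illustrated in one dimension. The sketch below is an assumption-laden simplification, not the paper's implementation: it exaggerates the gradients inside a "selected" region of a luminance profile, then reconstructs the signal by solving the 1-D Poisson system with the original boundary values fixed.

```python
import numpy as np

def poisson_reconstruct_1d(grad, left, right):
    """Recover a signal s whose differences best match `grad`, with fixed
    endpoints, by solving the 1-D Poisson (tridiagonal Laplacian) system."""
    n = len(grad) + 1
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = 1.0; b[0] = left          # Dirichlet boundary (left)
    A[-1, -1] = 1.0; b[-1] = right      # Dirichlet boundary (right)
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
        b[i] = grad[i] - grad[i - 1]    # divergence of the target gradient
    return np.linalg.solve(A, b)

# Toy luminance ramp; exaggerate gradients inside a hypothetical object region.
lum = np.linspace(0.2, 0.8, 9)
g = np.diff(lum)
g[3:6] *= 2.0
out = poisson_reconstruct_1d(g, lum[0], lum[-1])
```

    Because the boundary values are pinned, the modified gradients are absorbed smoothly across the signal rather than producing a hard false edge; a 2-D image version solves the same system over the image Laplacian.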

    Towards a Perception Based Image Editing System

    The primary goal of this research is to develop a perception based image editing system. The input to this system will be either a rendered image, a photograph, or a high dynamic range image. We are currently developing techniques that allow the user to edit these images in a perceptually intuitive manner. Specifically, we are considering the following image editing features: (1) warm-cool image adjustment, (2) intensity adjustment, (3) contrast adjustment, and (4) detail adjustment. The algorithms we are developing can be used either in an interactive editing system or for automatic image adjustment.
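    As a rough illustration of a warm-cool adjustment, the sketch below blends pixels toward hypothetical warm and cool anchor colors. The anchor values and the linear blending rule are assumptions for illustration, not the system's actual algorithm.

```python
import numpy as np

# Hypothetical warm/cool anchors in RGB (assumed, not from the paper).
WARM = np.array([1.0, 0.6, 0.2])
COOL = np.array([0.2, 0.5, 1.0])

def warm_cool_adjust(img, amount):
    """Blend each pixel toward the warm (amount > 0) or cool (amount < 0)
    anchor.  img: H x W x 3 float array in [0, 1]; amount in [-1, 1]."""
    target = WARM if amount > 0 else COOL
    a = abs(amount)
    return np.clip((1.0 - a) * img + a * target, 0.0, 1.0)

gray = np.full((2, 2, 3), 0.5)
warmer = warm_cool_adjust(gray, 0.3)   # red channel rises, blue falls
```

    A production system would more likely operate in a perceptual color space (e.g. CIELAB, shifting along the b* axis) rather than blending raw RGB.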

    The Real Effect of Warm-Cool Colors

    The phenomenon of warmer colors appearing nearer in depth to viewers than cooler colors has been studied extensively by psychologists and other vision researchers. The vast majority of these studies have asked human observers to view physically equidistant, colored stimuli and compare them for relative depth. However, in most cases, the stimuli presented were rather simple: straight colored lines, uniform color patches, point light sources, or symmetrical objects with uniform shading. Additionally, the colors used were typically highly saturated. Although such stimuli are useful in isolating and studying depth cues in certain contexts, they leave open the question of whether the human visual system operates similarly for realistic objects. This paper presents the results of an experiment designed to explore the color-depth relationship for realistic, colored objects with varying shading and contour.

    The Effect of Object Color on Depth Ordering

    The relationship between color and perceived depth for realistic, colored objects with varying shading was explored. Background: Studies have shown that warm-colored stimuli tend to appear nearer in depth than cool-colored stimuli. The majority of these studies asked human observers to view physically equidistant, colored stimuli and compare them for relative depth. However, in most cases, the stimuli presented were rather simple: straight colored lines, uniform color patches, point light sources, or symmetrical objects with uniform shading. Additionally, the colors were typically highly saturated. Although such stimuli are useful for isolating and studying depth cues in certain contexts, they leave open the question of whether the human visual system operates similarly for realistic objects. Method: Participants were presented with all possible pairs from a set of differently colored objects and were asked to select the object in each pair that appeared closest to them. The objects were presented on a standard computer screen against four different uniform backgrounds of varying intensity. Results: Our results show that the relative strength of color as a depth cue increases when the colored stimuli are presented against darker backgrounds and decreases when presented against lighter backgrounds. Conclusion: Color does impact our depth perception, even though it is a relatively weak indicator and is not necessarily the overriding depth cue for complex, realistic objects. Application: Our observations can be used to guide the selection of color to enhance the perceived depth of objects presented on traditional display devices and newer immersive virtual environments.
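    The all-pairs method described above can be tallied into a per-color "nearness" score by counting how often each color is chosen as closest. The responses below are hypothetical, for illustration only; they are not the study's data.

```python
from collections import Counter
from itertools import combinations

colors = ["red", "yellow", "green", "blue"]

# Hypothetical responses: for each pair, the color judged "closest".
choices = {("red", "yellow"): "red",    ("red", "green"): "red",
           ("red", "blue"): "red",      ("yellow", "green"): "yellow",
           ("yellow", "blue"): "yellow", ("green", "blue"): "blue"}

wins = Counter()
for pair in combinations(colors, 2):
    wins[choices[pair]] += 1            # one vote per pairwise judgment

# Rank colors from most to least often chosen as nearest.
ranking = sorted(colors, key=lambda c: -wins[c])
```

    Real analyses of such paired-comparison data typically go further (e.g. Thurstone or Bradley-Terry scaling) to place the stimuli on an interval scale.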

    Evaluating Audience Engagement of an Immersive Performance on a Virtual Stage

    Presenting theatrical performances in virtual reality (VR) has been an active area of research since the early 2000s. VR provides a unique form of storytelling, made possible through the use of physically and digitally distributed 3D worlds. We describe a methodology for determining audience engagement in a virtual theatre performance. We use a combination of galvanic skin response (GSR) data, the self-reported Positive and Negative Affect Schedule (PANAS), post-viewing reflection, and a think-aloud method to assess user reaction to the virtual reality experience. In this study, we combine the implicit physiological data from GSR with explicit user feedback to produce a holistic metric for assessing immersion. Although the study evaluated a particular artistic work, its methodology provides a foundation for conducting similar research. The combination of PANAS, self-reflection, and the think-aloud method in conjunction with GSR data constitutes a novel approach in the study of live performance in virtual reality. The approach is also extendable to other implicit measures such as pulse rate, blood pressure, or eye tracking. Our case study compares the experience of viewing the performance on a computer monitor to viewing it with a head-mounted display. Results showed statistically significant differences based on viewing platform in the PANAS self-report metric, as well as in GSR measurements. Feedback obtained via the think-aloud and reflection analysis also emphasized qualitative differences between the two viewing scenarios.

    Using Texture Synthesis for Non-Photorealistic Shading from Paint Samples

    This paper presents several methods for shading meshes from scanned paint samples that represent dark-to-light transitions. Our techniques emphasize artistic control of brush stroke texture and color. We first demonstrate how the texture of the paint sample can be separated from its color gradient. We then demonstrate three methods, two real-time and one off-line, for producing rendered, shaded images from the texture samples. All three techniques use texture synthesis to generate additional paint samples. Finally, we develop metrics for evaluating how well each method achieves our goal in terms of texture similarity, shading correctness, and temporal coherence.
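    One simplified reading of the shading step is a lookup into a dark-to-light paint strip indexed by shading value. The sketch below is a stand-in under that assumption: it omits the texture synthesis entirely and uses a synthetic gradient strip in place of a scanned paint sample.

```python
import numpy as np

# Synthetic dark-to-light "paint strip": columns run dark -> light.
# A real strip would be a scanned paint sample, extended by texture synthesis.
strip = np.tile(np.linspace(0.1, 0.9, 16), (8, 1))   # shape (8, 16)

def shade_lookup(shading, strip):
    """Map a shading image (0 = dark, 1 = light) to paint-strip samples:
    the shading value selects the column, the pixel row selects the row."""
    h, w = shading.shape
    cols = np.clip((shading * (strip.shape[1] - 1)).round().astype(int),
                   0, strip.shape[1] - 1)
    rows = np.tile((np.arange(h) % strip.shape[0])[:, None], (1, w))
    return strip[rows, cols]

# Horizontal shading ramp: left in shadow, right fully lit.
shading = np.tile(np.linspace(0.0, 1.0, 8), (4, 1))
out = shade_lookup(shading, strip)
```

    Tiling rows this way is the crudest possible choice; the paper's methods synthesize texture precisely to avoid the visible repetition this would cause.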

    Differential Privacy for Eye-Tracking Data

    As large eye-tracking datasets are created, data privacy is a pressing concern for the eye-tracking community. De-identifying data does not guarantee privacy, because multiple datasets can be linked for inferences. A common belief is that aggregating individuals' data into composite representations such as heatmaps protects the individual. However, we analytically examine the privacy of (noise-free) heatmaps and show that they do not guarantee privacy. We further propose two noise mechanisms that guarantee privacy and analyze their privacy-utility tradeoff. Analysis reveals that our Gaussian noise mechanism is an elegant solution to preserve privacy for heatmaps. Our results have implications for interdisciplinary research to create differentially private mechanisms for eye tracking.
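    A minimal sketch of a Gaussian mechanism applied to a heatmap follows, using the standard (epsilon, delta) calibration for Gaussian noise. The sensitivity and privacy parameters below are illustrative assumptions; this is not the paper's exact mechanism or analysis.

```python
import numpy as np

def private_heatmap(heatmap, sensitivity, epsilon, delta, rng):
    """Gaussian mechanism: add i.i.d. Gaussian noise with sigma calibrated
    to (epsilon, delta)-differential privacy.  `sensitivity` is the maximum
    L2 change in the heatmap caused by one individual's data."""
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return heatmap + rng.normal(0.0, sigma, heatmap.shape)

rng = np.random.default_rng(0)
hm = np.zeros((4, 4))                    # stand-in for an aggregated heatmap
noisy = private_heatmap(hm, sensitivity=1.0, epsilon=1.0, delta=1e-5, rng=rng)
```

    The privacy-utility tradeoff the abstract mentions is visible in the sigma formula: smaller epsilon (stronger privacy) means proportionally more noise, and utility depends on how much of the heatmap's signal survives that noise.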

    EllSeg: An Ellipse Segmentation Framework for Robust Gaze Tracking

    Ellipse fitting, an essential component in pupil- or iris-tracking-based video oculography, is performed on previously segmented eye parts generated using various computer vision techniques. Several factors, such as occlusions due to eyelid shape, camera position, or eyelashes, frequently break ellipse fitting algorithms that rely on well-defined pupil or iris edge segments. In this work, we propose training a convolutional neural network to directly segment entire elliptical structures and demonstrate that such a framework is robust to occlusions and offers superior pupil and iris tracking performance (at least a 10% and 24% increase in pupil and iris center detection rate, respectively, within a two-pixel error margin) compared to using standard eye parts segmentation, across multiple publicly available synthetic segmentation datasets.
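    The fitting stage that such a segmentation feeds can be sketched with a moments-based ellipse estimate on a binary mask. This is an assumed stand-in for the downstream fitting step (not the paper's network or its fitting procedure), shown on a synthetic filled ellipse in place of a predicted pupil mask.

```python
import numpy as np

def ellipse_from_mask(mask):
    """Estimate the center and semi-axes of the ellipse best matching a
    binary mask, via the mask's second-order image moments."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    cov = np.cov(np.stack([xs - cx, ys - cy]))
    evals = np.linalg.eigvalsh(cov)          # ascending eigenvalues
    # For a uniform filled ellipse, semi-axis = 2 * sqrt(variance).
    minor, major = 2.0 * np.sqrt(evals)
    return (cx, cy), (major, minor)

# Synthetic filled ellipse: center (32, 32), semi-axes 20 (x) and 10 (y).
yy, xx = np.mgrid[0:64, 0:64]
mask = ((xx - 32) / 20.0) ** 2 + ((yy - 32) / 10.0) ** 2 <= 1.0
center, axes = ellipse_from_mask(mask)
```

    Because the moments use every pixel of the filled region rather than edge segments alone, this style of estimate degrades more gracefully when parts of the boundary are occluded, which is the same intuition behind segmenting the entire elliptical structure.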